‘R’ skills:
The ggplot2, plotly, and scales libraries allow us to build complex visualizations that aid the generation of further insights. The psych library helps us explore the interactions among the data through scatter plots and histograms.
QRM provides estimation and simulation for the Student-t and generalized Pareto distribution (GPD).
quantreg provides estimation and inference methods for models of conditional quantiles.
flexdashboard to publish a group of related data visualizations as a dashboard. It allows for storyboard layouts for presenting sequences of visualizations and related commentary.
shiny to create interactive visualizations of data.
tidyr helps clean and reshape data for analysis.
zoo helps handle irregular time series of numeric vectors/matrices and factors.
Function explanations:
read.csv(): reads in the comma separated file with data for nickel, copper, and aluminum
na.omit(): returns a data table with just the rows where the specified columns have no missing value in any of them.
as.matrix(): coerces to a matrix object
ifelse(): returns a value with the same shape as the vector it is used on filled with elements depending on whether the test condition is TRUE or FALSE. Was used to add in a direction column for whether the return is positive or negative.
diff(): returns suitably lagged and iterated differences
log(): computes logarithms, by default natural logarithms
abs(): returns absolute values
colnames(): sets the column names of a matrix-like object
paste(): concatenates vectors after converting to character
as.Date(): converts between character representations and objects of class “Date” representing calendar dates
cbind(): takes a vector, matrix, or dataframe and combines by columns
as.character(): coerces objects to type “character”
data.frame(): creates data frames with tightly coupled collections of variables
as.zooreg(): coerces an object to class “zooreg”, a subclass of “zoo” used to represent both weakly and strictly regular series.
data_moments(): using the “moments” and “matrixStats” libraries it creates a dataframe with a set of common data summarization methods (means, median, standard deviation, IQR, skewness, and kurtosis)
quantile(): produces sample quantiles corresponding to the given probabilities
subset(): returns a subset of the returns (R) dataframe with nickel, copper, and aluminum returns where nickel is greater than the 95th quantile of nickel
apply(): returns a vector or list of values by applying a function to an array or matrix. Used to get the column means of the subsetted R dataframe.
matrix(): used to create an empty matrix with 300 rows and 3 columns, which a for loop then populates with the solutions from solve.QP()
solve.QP(): implements the dual method of Goldfarb and Idnani for solving quadratic programming problems
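The data-preparation steps above can be sketched end to end. This is a hedged reconstruction, not the project's actual code: the original reads prices with read.csv() and na.omit(), but here simulated prices stand in for the CSV so the sketch is self-contained, and the column names are hypothetical.

```r
# Simulated stand-in for read.csv("...") + na.omit(); column names are assumptions.
set.seed(42)
n <- 500
prices <- data.frame(
  date      = format(seq(as.Date("2017-03-15"), by = "-1 day", length.out = n), "%m/%d/%Y"),
  nickel    = 10000 * cumprod(1 + rnorm(n, 0, 0.017)),
  copper    =  5000 * cumprod(1 + rnorm(n, 0, 0.012)),
  aluminium =  1800 * cumprod(1 + rnorm(n, 0, 0.012))
)
prices <- na.omit(prices)                          # keep only complete rows
data <- as.matrix(prices[, c("nickel", "copper", "aluminium")])

# Percentage log returns: lagged differences of log prices.
R <- diff(log(data)) * 100
colnames(R) <- paste(colnames(data), "returns", sep = ".")

# Absolute returns as a size measure, plus an up/down direction flag per metal.
R.abs <- abs(R)
direction <- ifelse(R >= 0, "up", "down")

# Reassemble with calendar dates (as.Date parses the m/d/Y strings); the first
# date is dropped because diff() loses one observation.
dates <- as.Date(as.character(prices$date[-1]), format = "%m/%d/%Y")
R.df <- data.frame(date = dates, R)

# Tail snapshot: column means on days when nickel exceeds its 95th percentile.
q95 <- quantile(R[, "nickel.returns"], 0.95)
R.tail <- subset(data.frame(R), nickel.returns > q95)
apply(R.tail, 2, mean)
```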
Structure of the date column (str() output):
chr [1:1297] "3/15/2017" "3/14/2017" "3/13/2017" "3/10/2017" ...
| | mean | median | std_dev | IQR | skewness | kurtosis |
|---|---|---|---|---|---|---|
| nickel | 0.0489 | 0.0928 | 1.7072 | 2.0479 | 0.1932 | 5.2688 |
| copper | 0.0200 | 0.0595 | 1.1815 | 1.3508 | -0.2010 | 4.8894 |
| aluminium | 0.0149 | 0.0000 | 1.2080 | 1.0672 | -0.1557 | 6.3417 |
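A table like the one above can be produced by a helper along the lines of the data_moments() function described earlier. The original uses the moments and matrixStats libraries; the sketch below is a base-R approximation (the exact helper isn't shown in the source), with skewness and kurtosis computed from standardized moments.

```r
# Minimal sketch of a data_moments()-style summary (assumed implementation,
# base R in place of the moments/matrixStats calls the original describes).
data_moments <- function(data) {
  data <- as.matrix(data)
  skew <- function(x) mean(((x - mean(x)) / sd(x))^3)
  kurt <- function(x) mean(((x - mean(x)) / sd(x))^4)   # raw kurtosis; normal = 3
  data.frame(
    mean     = apply(data, 2, mean),
    median   = apply(data, 2, median),
    std_dev  = apply(data, 2, sd),
    IQR      = apply(data, 2, IQR),
    skewness = apply(data, 2, skew),
    kurtosis = apply(data, 2, kurt),
    row.names = colnames(data)
  )
}

# Usage with simulated returns:
set.seed(1)
R <- cbind(nickel = rnorm(1000, 0.05, 1.7), copper = rnorm(1000, 0.02, 1.2))
round(data_moments(R), 4)
```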
| | mean | median | std_dev | IQR | skewness | kurtosis |
|---|---|---|---|---|---|---|
| actual | 2.8167 | 2.8440 | 0.0828 | 0.0696 | -2.882 | 15.548 |
| predicted | 3.1322 | 3.0156 | 0.5703 | 0.6607 | 1.367 | 5.662 |
| residuals | -0.3155 | -0.2036 | 0.5626 | 0.6541 | -1.325 | 5.541 |
1. How would the performance of these commodities affect the size and timing of shipping arrangements?
The value at risk of the shipment mix would track the market performance, and hence the value, of the commodities. The timing of a shipment may directly correlate with the projected volatility of the shipment mix along with its projected returns versus risk. If a shipment contains our discovered optimal mix of all three metals, for example, shipping near a seasonal or cyclical peak for copper, nickel, or aluminium would optimize shipment value while remaining diversified. It may even be necessary to invert the proportions of the shipment mix relative to market performance in order to mitigate risk; in that case, more shipments would be needed to maintain a lower value at risk while still capitalizing on pricing. The means and standard deviations of the metals and of the mixes, which when combined could really be considered a metal index, give us a way to average out our returns and losses, creating more diversity in our market holdings, and to optimize the mix we purchase with our 250 million allocated dollars.
2. How would the value of new shipping arrangements affect the value of our business with our current customers?
Understandably, the ship owners would charge a proportionate, or perhaps arbitrary, premium for peak-pricing shipments, as there is simply more at stake, unless this sort of market-timed shipment agreement was stated in the terms and conditions or the manifest. The manufacturers would have arranged a forward price relative to market value, so they may continue to do so, paying larger strike prices, but any shipments during a commodity market cycle may spur increased forward activity. The traders themselves may react to increased prices with additional volume, which tends to magnify gains and losses; their brokerage fees should not change, since it is volume that benefits traders and market makers. Additionally, they may be willing to pay premiums for rush shipments in order to capitalize on current or near-future pricing.
3. How would we manage the allocation of existing resources given we have just landed in this new market?
We have 250 million dollars allocated and need to determine which mix of metals to purchase in order to maximize returns while keeping risk within our acceptable limit. We would weight the more stable metals more heavily to minimize our volatility and exposure in the market, and we use the Sharpe ratio to compare our risk-adjusted return against the index of the total metals market. The key is also to keep the recommended amount of cash on hand to offset the value at risk of our risky portfolio mix; if we do not, the portfolio will be riskier than our model allows and not as diversified as this problem requires. This cash acts as the risk-free allocation we can draw on to cover losses.

Initially, we might trade derivatives (primarily options) to produce some cash flow while we test our model against current market conditions. This also gives us a chance to minimize market risk while developing a sense of optimal market timing and cycles. Should our model track the macroeconomic trends, we could then implement the shipment-mix value and size while continuously updating our databases and model.

Our actual versus predicted Sharpe ratios indicate how well realized returns matched our risk exposure relative to the market index. When purchasing the metals and commodities we should seek the highest Sharpe ratio, which measures excess return (returns minus the risk-free rate) per unit of risk. If our Sharpe ratio beats the common index, we can also accept more downside risk within the portfolio. The actual-versus-predicted plots show Sharpe ratios under 3 throughout, and the slider-adjustable mu.P and sigma.P graphs show that when the portfolio is optimized over the three metals (copper, nickel, and aluminium), the majority of the mix falls within a quantity of 2 to 4.
It would also be wise to maintain levels of liquidity (say 5%), should there be externalities not captured in our model.
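The optimal-mix search described above can be sketched with solve.QP() from the quadprog package (the Goldfarb–Idnani dual method named in the function list): for a grid of 300 target returns, find the minimum-variance weights, then pick the maximum-Sharpe (tangency) mix. The mean returns, covariance matrix, and risk-free rate below are illustrative assumptions, not the project's figures.

```r
# Hedged sketch of the tangency-portfolio search; numbers are illustrative only.
library(quadprog)

mu    <- c(nickel = 0.0489, copper = 0.0200, aluminium = 0.0149)  # toy mean returns
Sigma <- diag(c(1.71, 1.18, 1.21)^2)                              # toy covariance (no correlation)
rf    <- 0.005                                                    # assumed risk-free rate
n     <- length(mu)

targets <- seq(min(mu), max(mu), length.out = 300)
weights <- matrix(0, nrow = 300, ncol = n)   # empty 300 x 3 matrix, filled in the loop
sigma.P <- numeric(300)

for (i in seq_along(targets)) {
  # Equality constraints: weights sum to 1, portfolio return hits the target.
  Amat <- cbind(rep(1, n), mu)
  bvec <- c(1, targets[i])
  sol  <- solve.QP(Dmat = Sigma, dvec = rep(0, n), Amat = Amat, bvec = bvec, meq = 2)
  weights[i, ] <- sol$solution
  sigma.P[i]   <- sqrt(2 * sol$value)       # solve.QP minimizes (1/2) w' Sigma w
}

sharpe <- (targets - rf) / sigma.P
weights[which.max(sharpe), ]                # tangency (maximum-Sharpe) mix
```

Note that this sketch allows short positions; the project's actual frontier may add non-negativity constraints (extra rows in Amat with meq left at 2).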
Bonus: We discovered this piece of information while reviewing the project at https://wgfoote.shinyapps.io/extreme-4/. We did not include the answer because the code for the weights of the Markowitz/QR tangency portfolio was not provided in the original homework.